
    Bounds on the entanglement of two-qutrit systems from fixed marginals

    We discuss the problem of characterizing upper bounds on entanglement in a bipartite quantum system when only the reduced density matrices (marginals) are known. In particular, starting from the known two-qubit case, we propose a family of candidates for maximally entangled mixed states with respect to fixed marginals for two qutrits. These states are extremal in the convex set of two-qutrit states with fixed marginals. Moreover, it is shown that they are always quasidistillable. As a by-product, we prove that any maximally correlated state that is quasidistillable must be pure. Our observations for two qutrits are supported by numerical analysis.
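
    As a rough illustration of the objects involved (not the authors' extremal construction), the following sketch builds the canonical maximally entangled two-qutrit state and checks that both marginals, the reduced density matrices obtained by partial trace, are maximally mixed:

```python
# A minimal sketch: canonical maximally entangled two-qutrit state and
# its marginals. Not the paper's construction; purely illustrative.
import numpy as np

d = 3  # qutrit dimension

# |psi> = (1/sqrt(3)) * sum_i |i>|i>
psi = np.zeros(d * d)
for i in range(d):
    psi[i * d + i] = 1.0
psi /= np.sqrt(d)

rho = np.outer(psi, psi.conj())  # full two-qutrit density matrix (9x9)

# Partial trace: reshape to (i, a, j, b) and trace out one subsystem
rho_A = np.trace(rho.reshape(d, d, d, d), axis1=1, axis2=3)
rho_B = np.trace(rho.reshape(d, d, d, d), axis1=0, axis2=2)

print(np.allclose(rho_A, np.eye(d) / d))  # True: marginal is I/3
print(np.allclose(rho_B, np.eye(d) / d))  # True
```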

    Approaches to the Estimation of the Local Average Treatment Effect in a Regression Discontinuity Design

    Regression discontinuity designs (RD designs) are used as a method for causal inference from observational data, where the decision to apply an intervention is made according to a ‘decision rule’ linked to some continuous variable. Such designs are increasingly being used in medicine. The local average treatment effect (LATE) has been established as an estimator of the intervention effect in an RD design, particularly where a design’s ‘decision rule’ is not adhered to strictly. Estimating the variance of the LATE is not necessarily straightforward. We consider three approaches to the estimation of the LATE: two-stage least squares, likelihood-based and Bayesian. We compare these under a variety of simulated RD designs and in a real example concerning the prescription of statins based on cardiovascular disease risk scores.
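
    A minimal sketch of the first approach, two-stage least squares in its local-linear Wald-ratio form, on simulated data; the bandwidth, threshold and data-generating process below are assumptions, not the paper's statin example:

```python
# Fuzzy-RD sketch: the LATE is the jump in the outcome at the threshold
# divided by the jump in the probability of treatment at the threshold.
import numpy as np

rng = np.random.default_rng(0)
n, c = 5000, 0.2               # sample size and decision threshold
x = rng.uniform(0.0, 0.4, n)   # assignment variable (e.g. a risk score)
z = (x >= c).astype(float)     # above-threshold indicator (the instrument)

# Fuzzy design: the decision rule is not followed perfectly
d = rng.binomial(1, 0.1 + 0.7 * z)                    # treatment received
y = 1.0 + 2.0 * x - 0.5 * d + rng.normal(0, 0.5, n)   # true effect = -0.5

# Local linear fits within a bandwidth around the threshold
h = 0.1
w = np.abs(x - c) <= h
X = np.column_stack([np.ones(n), z, x - c, z * (x - c)])[w]

num = np.linalg.lstsq(X, y[w], rcond=None)[0][1]  # outcome jump at c
den = np.linalg.lstsq(X, d[w], rcond=None)[0][1]  # treatment-rate jump at c
print("LATE estimate:", num / den)                # should be close to -0.5
```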

    Value of Information: A Tool to Improve Research Prioritization and Reduce Waste

    In a Guest Editorial, Cosetta Minelli and Gianluca Baio explain how value of information (VOI) analysis can prioritize research projects by identifying uncertainty in existing knowledge and then estimating the expected benefits of reducing that uncertainty.

    Probabilistic Sensitivity Analysis in Health Economics

    Health economic evaluations have recently built upon more advanced statistical decision-theoretic foundations, and it is nowadays officially required that uncertainty about both parameters and observable variables be taken into account thoroughly, increasingly often by means of Bayesian methods. Among these, Probabilistic Sensitivity Analysis (PSA) has assumed a predominant role, and Cost-Effectiveness Acceptability Curves (CEACs) have become established as its most important tool. The objective of this paper is to review the problem of health economic assessment from the standpoint of Bayesian statistical decision theory, with particular attention to the philosophy underlying the procedures for sensitivity analysis. We advocate an integrated vision based on value of information analysis, a procedure that is well grounded in the theory of decision under uncertainty, and criticise the indiscriminate use of other approaches to sensitivity analysis.
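
    To make the two tools concrete, a minimal Monte Carlo sketch with entirely hypothetical PSA inputs: the CEAC value at a given willingness to pay is the proportion of draws with positive incremental net benefit, and the per-person expected value of perfect information (EVPI) follows from the same draws:

```python
# PSA sketch: CEAC and per-person EVPI from simulated posterior draws
# of incremental effects and costs. All inputs are assumed values.
import numpy as np

rng = np.random.default_rng(1)
S = 10000
delta_e = rng.normal(0.05, 0.03, S)    # incremental effectiveness (QALYs)
delta_c = rng.normal(800.0, 400.0, S)  # incremental cost

for k in (10000, 20000, 30000):        # willingness to pay per QALY
    inb = k * delta_e - delta_c        # incremental net benefit draws
    ceac = np.mean(inb > 0)            # P(new treatment is cost-effective)
    # EVPI: expected gain from resolving all uncertainty before deciding
    evpi = np.mean(np.maximum(inb, 0.0)) - max(np.mean(inb), 0.0)
    print(f"k={k}: CEAC={ceac:.3f}, EVPI={evpi:.1f}")
```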

    Bayesian regression discontinuity designs: Incorporating clinical knowledge in the causal analysis of primary care data

    The regression discontinuity (RD) design is a quasi-experimental design that estimates the causal effects of a treatment by exploiting naturally occurring treatment rules. It can be applied in any context where a particular treatment or intervention is administered according to a pre-specified rule linked to a continuous variable. Such thresholds are common in primary care drug prescription, where the RD design can be used to estimate the causal effect of medication in the general population. Such results can then be contrasted with those obtained from randomised controlled trials (RCTs) and inform prescription policy and guidelines, in a more realistic and less expensive setting. In this paper we focus on statins, a class of cholesterol-lowering drugs; however, the methodology can be applied to many other drugs, provided these are prescribed in accordance with pre-determined guidelines. NHS guidelines state that statins should be prescribed to patients with a 10-year cardiovascular disease risk score in excess of 20%. For patients whose scores are close to this threshold, there is an element of random variation in both the risk score itself and its measurement. We can thus consider the threshold a randomising device that assigns the prescription to units just above the threshold and withholds it from those just below. We thereby effectively replicate the conditions of an RCT in the area around the threshold, removing or at least mitigating confounding. We frame the RD design in the language of conditional independence, which clarifies the assumptions necessary to apply it to data and makes the links with instrumental variables clear. We also have context-specific knowledge about the expected sizes of the effects of statin prescription, and are thus able to incorporate it into Bayesian models by formulating informative priors on our causal parameters.
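
    A deliberately simplified sketch of the final step, combining a local estimate of the jump at the threshold with an informative prior through a conjugate normal-normal update; the paper's models are richer, and every number below is an assumption:

```python
# Simplified Bayesian step: informative normal prior on the causal
# parameter combined with a local estimate of the jump at the threshold.
import numpy as np

# Hypothetical local estimate of the treatment effect at the threshold
tau_hat, se = -0.45, 0.20          # e.g. from a local linear fit

# Informative prior encoding clinical knowledge (assumed values)
mu0, s0 = -0.50, 0.15              # expected effect and prior uncertainty

# Conjugate normal-normal posterior for the causal parameter
post_var = 1.0 / (1.0 / s0**2 + 1.0 / se**2)
post_mean = post_var * (mu0 / s0**2 + tau_hat / se**2)
print(f"posterior: N({post_mean:.3f}, sd={np.sqrt(post_var):.3f})")
```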

    Shrinkage Bayesian Causal Forests for Heterogeneous Treatment Effects Estimation

    This article develops a sparsity-inducing version of Bayesian Causal Forests, a recently proposed nonparametric causal regression model that employs Bayesian Additive Regression Trees and is specifically designed to estimate heterogeneous treatment effects from observational data. The sparsity-inducing component we introduce is motivated by empirical studies where not all the available covariates are relevant, leading to different degrees of sparsity underlying the surfaces of interest in the estimation of individual treatment effects. The extended version presented in this work, which we name Shrinkage Bayesian Causal Forest, is equipped with an additional pair of priors that allow the model to adjust the weight of each covariate through the corresponding number of splits in the tree ensemble. These priors improve the model’s adaptability to sparse data-generating processes, allow fully Bayesian feature shrinkage to be performed within a framework for treatment effect estimation, and thus uncover the moderating factors driving heterogeneity. In addition, the method allows prior knowledge about the relevant confounding covariates, and the relative magnitude of their impact on the outcome, to be incorporated into the model. We illustrate the performance of our method in simulation studies, in comparison to Bayesian Causal Forest and other state-of-the-art models, to demonstrate how it scales with an increasing number of covariates and how it handles strongly confounded scenarios. Finally, we also provide an example application using real-world data. Supplementary materials for this article are available online.
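
    One way to picture the sparsity-inducing component is as a Dirichlet prior over the covariates' splitting probabilities that is updated with the observed split counts (in the spirit of DART-style priors); this sketch is illustrative and does not reproduce the authors' exact specification:

```python
# Illustrative shrinkage mechanism: covariates that earn more splits in
# the ensemble receive higher posterior splitting weight; unused ones
# are shrunk towards zero. Hypothetical counts, not the paper's priors.
import numpy as np

rng = np.random.default_rng(2)
p, alpha = 10, 1.0                      # covariates, Dirichlet concentration

# Hypothetical split counts from one sweep of the tree ensemble:
# only the first two covariates are genuinely used
split_counts = np.array([40, 25, 0, 1, 0, 0, 0, 0, 0, 0])

# Conjugate update: Dirichlet(alpha/p + counts), then redraw the weights
s = rng.dirichlet(alpha / p + split_counts)
print(np.round(s, 3))  # mass concentrates on the relevant covariates
```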

    Variable Selection for Covariate Dependent Dirichlet Process Mixtures of Regressions

    Dirichlet Process Mixture (DPM) models have been increasingly employed to specify random partition models that take into account possible patterns within the covariates. Furthermore, in response to large numbers of covariates, methods for selecting the most important covariates have been proposed. Commonly, the covariates are chosen either for their importance in determining the clustering of the observations or for their effect on the level of a response variable (when a regression model is specified). Typically, both strategies involve the specification of latent indicators that regulate the inclusion of the covariates in the model; common examples involve the use of spike-and-slab prior distributions. In this work we review the most relevant DPM models that include the covariate information in the induced partition of the observations, and we focus extensively on the available variable selection techniques for these models. We highlight the main features of each model and demonstrate them in simulations.
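
    As a stylized example of the latent-indicator mechanism, the sketch below computes the posterior inclusion probability of a single covariate in a plain linear model with a point-mass spike and a normal slab, leaving out the DPM partition machinery entirely:

```python
# Spike-and-slab sketch: the posterior inclusion probability comes from
# the ratio of marginal likelihoods under the slab (beta ~ N(0, tau^2))
# and the spike (beta = 0), with prior inclusion probability pi.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)
n, sigma, tau, pi = 100, 1.0, 2.0, 0.5
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(0, sigma, n)        # covariate is truly relevant

spike_cov = sigma**2 * np.eye(n)                  # beta = 0
slab_cov = spike_cov + tau**2 * np.outer(x, x)    # beta integrated out

log_m1 = multivariate_normal.logpdf(y, mean=np.zeros(n), cov=slab_cov)
log_m0 = multivariate_normal.logpdf(y, mean=np.zeros(n), cov=spike_cov)

# Posterior inclusion probability P(gamma = 1 | y)
p_incl = 1.0 / (1.0 + (1 - pi) / pi * np.exp(log_m0 - log_m1))
print(f"posterior inclusion probability: {p_incl:.3f}")
```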

    The Effects of Model Misspecification in Unanchored Matching-Adjusted Indirect Comparison (MAIC): Results of a Simulation Study

    OBJECTIVES: To assess the performance of unanchored matching-adjusted indirect comparison (MAIC) when matching on first moments or higher moments in cross-study comparisons under a variety of conditions. A secondary objective was to gauge the performance of the method relative to propensity score weighting (PSW). METHODS: A simulation study was designed based on an oncology example, where MAIC was used to account for differences between a contemporary trial in which patients had more favorable characteristics and a historical control. A variety of scenarios were then tested by varying the setup of the simulation study, including violating the implicit or explicit assumptions of MAIC. RESULTS: Under ideal conditions and across a variety of scenarios, MAIC performed well (shown by a low mean absolute error [MAE]) and was unbiased (shown by a mean error [ME] of about zero). The performance of the method deteriorated where the matched characteristics had low explanatory power or there was poor overlap between studies. The method became biased (nonzero ME) only when important characteristics were not included in the matching. Where the method showed poor performance, this was exaggerated if matching was also performed on the variance (i.e., higher moments). Relative to PSW, MAIC provided similar results in most circumstances, although it exhibited slightly higher MAE and a higher chance of exaggerating bias. CONCLUSIONS: MAIC appears well suited to adjusting for cross-trial differences provided the assumptions underpinning the model are met, with relatively little efficiency loss compared with PSW.
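
    For reference, the core unanchored MAIC weighting step, matching on first moments via the usual method-of-moments estimator, can be sketched as follows (the covariates and target means are illustrative, not the study's simulation setup):

```python
# MAIC weighting sketch: find weights w_i = exp(x_i' a) so that the
# weighted means of the trial's covariates match the comparator study's
# reported (aggregate) means.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 500
X = np.column_stack([rng.normal(55, 8, n),      # e.g. age
                     rng.binomial(1, 0.4, n)])  # e.g. a binary marker
target = np.array([60.0, 0.55])                 # aggregate means to match

Xc = X - target                                 # centre at the target means

def objective(a):
    # Convex; at the minimiser the gradient sum_i (x_i - target) * w_i = 0,
    # i.e. the weighted covariate means equal the target exactly.
    return np.sum(np.exp(Xc @ a))

a_hat = minimize(objective, np.zeros(X.shape[1]), method="BFGS").x
w = np.exp(Xc @ a_hat)

print(np.average(X, axis=0, weights=w))  # approximately [60, 0.55]
```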

    Estimating Individual Treatment Effects using Non-Parametric Regression Models: a Review

    Large observational datasets are increasingly available in disciplines such as health, economics and the social sciences, where researchers are interested in causal questions rather than prediction. In this paper, we investigate the problem of estimating heterogeneous treatment effects using non-parametric regression-based methods. First, we introduce the setup and the issues related to conducting causal inference with observational or non-fully randomized data, and how these issues can be tackled with the help of statistical learning tools. Then, we provide a review of state-of-the-art methods, with a particular focus on non-parametric modeling, and we cast them under a unifying taxonomy. After presenting a brief overview of the problem of model selection, we illustrate the performance of some of the methods in three different simulation studies and in a real-world example investigating the effect of participation in school meal programs on health indicators.
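
    A minimal example of one member of this family, the two-model ‘T-learner’, using off-the-shelf random forests on simulated data; the reviewed methods are more sophisticated, and this is only a baseline sketch:

```python
# T-learner sketch: fit separate non-parametric regressions for treated
# and control units, then take the difference of their predictions as
# the estimated individual treatment effect (ITE). Simulated data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
n = 2000
X = rng.normal(size=(n, 5))
t = rng.binomial(1, 0.5, n)                     # randomised for simplicity
tau = 1.0 + X[:, 0]                             # heterogeneous true effect
y = X[:, 1] + t * tau + rng.normal(0, 0.5, n)

m1 = RandomForestRegressor(n_estimators=200).fit(X[t == 1], y[t == 1])
m0 = RandomForestRegressor(n_estimators=200).fit(X[t == 0], y[t == 0])

ite_hat = m1.predict(X) - m0.predict(X)         # individual effect estimates
print("corr with true ITE:", np.corrcoef(ite_hat, tau)[0, 1])
```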